In this paper we consider the combination of two ensemble techniques, both capable of producing diverse binary base classifiers. AdaBoost, a version of boosting, is combined with Output Coding for solving multiclass problems. Decision trees are chosen as the base classifiers, and the issue of tree pruning is addressed. Pruning produces less complex trees and sometimes leads to better generalisation...
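As a rough illustration of the combination this abstract describes, the sketch below wires an AdaBoost ensemble into an output-coding wrapper, with pruned decision trees as base classifiers. It assumes scikit-learn (version 1.2 or later, where the parameter is named `estimator`); the dataset, pruning strength and code size are illustrative choices, not the paper's experimental setup.

```python
# Hedged sketch: Output Coding over AdaBoost ensembles of pruned trees.
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier
from sklearn.multiclass import OutputCodeClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Pruned trees as base classifiers: ccp_alpha > 0 turns on
# cost-complexity pruning (value here is illustrative).
pruned_tree = DecisionTreeClassifier(ccp_alpha=0.01)

# Each binary dichotomy produced by the output code is solved
# by its own AdaBoost ensemble of pruned trees.
booster = AdaBoostClassifier(estimator=pruned_tree, n_estimators=50)

# code_size controls how many binary problems are generated per class.
clf = OutputCodeClassifier(estimator=booster, code_size=2, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```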
Bagging and boosting are two popular ensemble methods that achieve better accuracy than a single classifier. These techniques have limitations on massive datasets, as the size of the dataset can be a bottleneck. Voting many classifiers built on small subsets of data (“pasting small votes”) is a promising approach for learning from massive datasets. Pasting small votes can utilize the power of boosting...
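A minimal sketch of the plain "pasting small votes" idea: many classifiers, each built on a small subset of the data drawn without replacement, combined by voting. It uses scikit-learn's `BaggingClassifier`, whose `bootstrap=False` mode is pasting; the subset size and estimator count are illustrative, and the boosting-flavoured importance sampling the abstract alludes to is not shown here.

```python
# Hedged sketch of pasting small votes on a large dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=100_000, n_features=20, random_state=0)

votes = BaggingClassifier(
    estimator=DecisionTreeClassifier(),
    n_estimators=100,     # many small voters
    max_samples=0.01,     # each voter sees ~1% of the data
    bootstrap=False,      # sampling without replacement = pasting
    n_jobs=-1,
    random_state=0,
)
votes.fit(X, y)
print(votes.score(X, y))
```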
In combining classifiers, it is believed that diverse ensembles perform better than non-diverse ones. In order to test this hypothesis, we study the accuracy and diversity of ensembles obtained in bagging and boosting applied to the nearest mean classifier. In our simulation study we consider two diversity measures: the Q statistic and the disagreement measure. The experiments, carried out on four...
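Both diversity measures named in this abstract are standard pairwise statistics and can be computed directly from the correctness of two classifiers on a common test set. The helper below is a self-contained sketch; the function name and example data are ours.

```python
# Pairwise diversity: Q statistic and disagreement measure.
import numpy as np

def pairwise_diversity(correct_i, correct_j):
    """correct_i, correct_j: boolean arrays, True where each classifier is right."""
    a = np.asarray(correct_i, dtype=bool)
    b = np.asarray(correct_j, dtype=bool)
    n11 = np.sum(a & b)      # both correct
    n00 = np.sum(~a & ~b)    # both wrong
    n10 = np.sum(a & ~b)     # only classifier i correct
    n01 = np.sum(~a & b)     # only classifier j correct
    n = n11 + n00 + n10 + n01
    # Q is 0 for statistically independent classifiers and negative when
    # the two err on different points (more diverse). Undefined when the
    # denominator is zero; 0 is used here as a neutral fallback.
    denom = n11 * n00 + n01 * n10
    q = (n11 * n00 - n01 * n10) / denom if denom else 0.0
    # Disagreement: fraction of points on which the two classifiers differ.
    dis = (n01 + n10) / n
    return q, dis

# Example: two classifiers that err on mostly different points.
q, dis = pairwise_diversity([1, 1, 0, 1, 0, 1], [1, 0, 1, 1, 1, 0])
print(q, dis)   # -1.0, 0.667: maximally negatively dependent, high disagreement
```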
The dynamical evolution of weights in the AdaBoost algorithm contains useful information about the role that the associated data points play in the building of the AdaBoost model. In particular, the dynamics induce a bipartition of the data set into two (easy/hard) classes. Easy points have little influence on the making of the model, while the varying relevance of hard points can be gauged in terms of...
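The weight dynamics the abstract refers to can be made visible with a hand-rolled discrete AdaBoost loop that records every point's weight at each round. The easy/hard split below simply thresholds the final weights; that criterion, the stump learners and all parameters are illustrative assumptions, not the paper's method.

```python
# Hedged sketch: track per-point AdaBoost weights across rounds.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, flip_y=0.1, random_state=0)
y_pm = 2 * y - 1                     # labels in {-1, +1}
n, rounds = len(y), 30
w = np.full(n, 1.0 / n)
history = np.empty((rounds, n))

for t in range(rounds):
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y_pm, sample_weight=w)
    pred = stump.predict(X)
    err = np.clip(np.sum(w[pred != y_pm]), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)
    w = w * np.exp(-alpha * y_pm * pred)   # misclassified points gain weight
    w /= w.sum()
    history[t] = w                          # full weight trajectory per point

# Illustrative bipartition: points whose weight collapsed are "easy".
easy = history[-1] < 1.0 / (10 * n)
print(f"{easy.sum()} easy points, {(~easy).sum()} hard points")
```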
We look at three variants of the boosting algorithm, called here Aggressive Boosting, Conservative Boosting and Inverse Boosting. We track the diversity measure Q together with the accuracy during the progressive development of the ensembles, in the hope of detecting the point of "paralysis" of the training, if any. Three data sets are used: the artificial Cone-Torus data and the UCI Pima Indian...
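The three variants differ in how the weight-update step treats correctly and incorrectly classified points. The sketch below encodes one common reading of the names; it is an assumption, not the paper's exact rules: Aggressive reweights both groups, Conservative only raises the weights of misclassified points, and Inverse lowers them, steering the ensemble towards the easy points.

```python
# Hedged reconstruction of the three weight-update rules (assumed semantics).
import numpy as np

def reweight(w, misclassified, beta, variant):
    """w: current weights; misclassified: boolean mask; beta in (0, 1)."""
    w = w.copy()
    if variant == "aggressive":
        w[misclassified] /= beta     # boost the errors ...
        w[~misclassified] *= beta    # ... and damp the correct points
    elif variant == "conservative":
        w[misclassified] /= beta     # boost the errors, leave the rest alone
    elif variant == "inverse":
        w[misclassified] *= beta     # damp the errors: focus on easy points
    return w / w.sum()

w = np.full(6, 1 / 6)
miss = np.array([True, False, False, True, False, False])
for v in ("aggressive", "conservative", "inverse"):
    print(v, np.round(reweight(w, miss, beta=0.5, variant=v), 3))
```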